    Enabling Embodied Analogies in Intelligent Music Systems

    The present methodology is aimed at cross-modal machine learning and draws on multidisciplinary tools and methods from music, systematic musicology, dance, motion capture, human-computer interaction, computational linguistics, and audio signal processing. Main tasks include: (1) adapting wisdom-of-the-crowd approaches to embodiment in music and dance performance to create a dataset of music and music lyrics covering a variety of emotions, (2) applying audio/language-informed machine learning techniques to that dataset to automatically identify the emotional content of the music and the lyrics, and (3) integrating motion capture data from a Vicon system recording dancers performing to that music.
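
    Task (2) lends itself to a concrete illustration. The sketch below is one plausible reading, not the authors' pipeline: it assumes MFCC summaries for the audio side, TF-IDF features for the lyrics, and a logistic regression over the fused features; the file paths and emotion labels are hypothetical.

```python
# Minimal sketch of a bimodal emotion classifier: MFCC summaries for
# the audio, TF-IDF for the lyrics, logistic regression on the fused
# features. File paths, labels, and feature choices are illustrative
# assumptions, not the authors' pipeline.
import numpy as np
import librosa
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def audio_features(path):
    """Summarize a track as the mean of its MFCC frames."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one 13-dim vector per track

# Hypothetical crowd-labelled examples: (audio file, lyrics, emotion).
tracks = [
    ("songs/a.wav", "tears fall like rain tonight", "sad"),
    ("songs/b.wav", "dancing in the golden sun", "happy"),
    # ... more labelled examples ...
]

# Language side: TF-IDF over the lyric text.
lyrics = [lyric for _, lyric, _ in tracks]
text_feats = TfidfVectorizer().fit_transform(lyrics).toarray()

# Audio side: MFCC summaries, fused with the text features.
audio_feats = np.vstack([audio_features(path) for path, _, _ in tracks])
X = np.hstack([audio_feats, text_feats])
labels = [emotion for _, _, emotion in tracks]

# A simple classifier over the fused audio+language feature space.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```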

    Creative Autonomy in a Simple Interactive Music System

    Interactive music systems always exhibit some degree of autonomy in the creative process. The capacity to generate material that is primary, contextual, and novel to the outcome is proposed here as the bare minimum for creative autonomy in these systems. These assumptions are evaluated using Video Interactive VST Orchestra, a system that generates music through sound processing in interplay with a user. The system accepts live audio and video inputs, typically a camera and a microphone that capture a musician's performance. Mapping the variance in the musician's physical motion to the sound processing makes it possible to identify salience in the interaction and to characterize the system as autonomous. A case study is presented as evidence of creative autonomy in this simple yet highly effective system.
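
    The motion-to-sound mapping is only outlined in the abstract. The sketch below is one plausible reading under stated assumptions: OpenCV frame capture, inter-frame variance as the motion measure, and a hypothetical 0-to-1 effect parameter standing in for the system's sound processing.

```python
# Minimal sketch of a motion-to-sound mapping: inter-frame variance
# of the camera image stands in for the musician's physical motion and
# is scaled into a bounded sound-processing parameter. The "wet_level"
# control and its scale factor are assumptions; the abstract does not
# specify the system's actual mapping.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)              # live camera input
ok, prev = cap.read()
if not ok:
    raise RuntimeError("no camera frame available")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Motion estimate: variance of the inter-frame difference.
    motion = float(np.var(cv2.absdiff(gray, prev)))
    prev = gray

    # High movement pushes the hypothetical effect toward fully wet.
    wet_level = min(1.0, motion / 500.0)  # 500.0: assumed scale
    print(f"motion={motion:8.1f}  wet_level={wet_level:.2f}")

cap.release()
```

    In a live setting a value like this would drive an audio effect in real time rather than being printed.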